
(CVPR 2017) Image Super-Resolution via Deep Recursive Residual Network

Tai Y., Yang J., Liu X. Image Super-Resolution via Deep Recursive Residual Network. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, 2017.



1. Overview


This paper proposes the Deep Recursive Residual Network (DRRN), which

  • adopts residual learning, both local and global
  • uses recursion to control the number of parameters while increasing depth
  • has 2x, 6x and 14x fewer parameters than VDSR, DRCN and RED30, respectively


1.1.1. VDSR

  • high learning rate to accelerate convergence
  • residual learning + adjustable gradient clipping to solve the gradient explosion problem

1.1.2. DRCN

  • chain structure
  • recursive supervision and skip connections to mitigate the difficulty of training

1.2. Novelties

  • both global and local residual learning
  • recursive learning: increases depth without adding parameters

1.3. DRRN

1.3.1. Pre-Activation

  • residual units use the pre-activation ordering (BN → ReLU → conv) from He et al.'s identity-mapping ResNets, rather than conv → BN → ReLU

1.3.2. Recursive

  • all residual units inside a recursive block share the same weights, so the block can be unrolled to arbitrary depth without adding parameters
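A minimal sketch of the weight-sharing idea: the same weight set is reused in every unrolled residual unit, so depth grows with U while the parameter count stays fixed. Dense matrices stand in for the paper's 3×3 convolutions, and batch norm is omitted for brevity; `recursive_block` is a hypothetical name, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def recursive_block(x, W1, W2, U):
    """Unroll U residual units that all share the weights W1 and W2.

    Each unit follows the pre-activation ordering (activation before the
    weight layer) and adds back the block input x (local residual learning):
        h_u = x + W2 @ relu(W1 @ relu(h_{u-1}))
    """
    h = x
    for _ in range(U):
        h = x + W2 @ relu(W1 @ relu(h))  # identity branch is always the block input
    return h

rng = np.random.default_rng(0)
d = 8
W1, W2 = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
x = rng.normal(size=d)
out = recursive_block(x, W1, W2, U=3)  # depth grows, parameters do not
```

Note that the identity branch of every unit comes from the block input, not from the previous unit, which is what distinguishes DRRN's multi-path structure from a plain chain of residual units.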

1.3.3. Architecture

  • B recursive blocks stacked, followed by a reconstruction layer; a global identity branch adds the bicubic-interpolated LR input to the output, so the network only predicts the HR residual
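The overall layout can be sketched as follows: an input layer, B recursive blocks (each unrolling U weight-shared residual units), an output layer, and a global residual connection. Again, dense matrices stand in for convolutions and batch norm is dropped; `drrn` and the `params` layout are illustrative names, not the paper's.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def drrn(lr_up, params, B, U):
    """lr_up: bicubic-upscaled LR input; params: per-block shared weights."""
    h = params["W_in"] @ lr_up                  # input layer
    for b in range(B):
        W1, W2 = params["blocks"][b]            # one weight set per block...
        x_b = h
        for _ in range(U):                      # ...shared across its U units
            h = x_b + W2 @ relu(W1 @ relu(h))
    residual = params["W_out"] @ h              # output layer predicts the residual
    return lr_up + residual                     # global residual learning

rng = np.random.default_rng(1)
d = 8
params = {
    "W_in": np.eye(d),
    "W_out": rng.normal(size=(d, d)) * 0.1,
    "blocks": [(rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1)
               for _ in range(2)],
}
lr_up = rng.normal(size=d)
sr = drrn(lr_up, params, B=2, U=3)
```

The global skip means that if the predicted residual is zero, the output is exactly the interpolated input, which makes the learning target sparse and easier to fit.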

1.3.4. Parameters

  • U. the number of residual units in a recursive block
  • B. the number of recursive blocks
    • when U = 0, DRRN becomes VDSR
    • depth of DRRN. d = (1 + 2 × U) × B + 1
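The depth formula follows from each residual unit contributing two conv layers, each block having one leading conv, plus one reconstruction layer at the end:

```python
def drrn_depth(B, U):
    """Depth of DRRN: each recursive block has 1 leading conv plus
    2 convs per residual unit, and one reconstruction conv at the end."""
    return (1 + 2 * U) * B + 1

# The 20-layer model (B=1, U=9) and the deeper 52-layer variant (B=1, U=25):
print(drrn_depth(1, 9))    # 20
print(drrn_depth(1, 25))   # 52
```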


1.3.5. Loss Function

  • MSE between the ground-truth HR image and the network output
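A sketch of the standard MSE objective over N training pairs, assuming the paper's usual 1/(2N) scaling; `mse_loss` is an illustrative name:

```python
import numpy as np

def mse_loss(hr, sr):
    """L = 1/(2N) * sum_i ||x_i - f(y_i)||^2 over N image pairs.

    hr, sr: arrays of shape (N, ...) holding ground truth and predictions.
    """
    n = hr.shape[0]
    return np.sum((hr - sr) ** 2) / (2 * n)

hr = np.ones((4, 3, 3))
sr = np.zeros((4, 3, 3))
print(mse_loss(hr, sr))   # 4.5
```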



2. Experiments


2.1. Dataset

  • Set5
  • Set14
  • BSD100
  • Urban100

2.2. Augmentation

  • flipping
  • rotation
  • scale. x2, x3, x4
  • training samples. 31×31 patches with a stride of 21
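The patch extraction and augmentation above can be sketched as follows; the exact flip/rotation combinations used in the paper may differ from the 8 variants generated here:

```python
import numpy as np

def extract_patches(img, size=31, stride=21):
    """Crop all size x size patches on a stride x stride grid."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, stride)
            for c in range(0, w - size + 1, stride)]

def augment(patch):
    """Horizontal flip x four 90-degree rotations -> 8 variants per patch."""
    return [np.rot90(flip, k)
            for flip in (patch, np.fliplr(patch))
            for k in range(4)]

img = np.arange(52 * 52, dtype=float).reshape(52, 52)
patches = extract_patches(img)       # offsets 0 and 21 in each dim -> 4 patches
samples = [a for p in patches for a in augment(p)]
print(len(patches), len(samples))    # 4 32
```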

2.3. Details

  • adjustable gradient clipping. gradients are clipped to [-θ/γ, θ/γ]
    • γ. current learning rate
    • θ = 0.01. gradient clipping parameter
  • DRRN with d=20 takes 4 days with 2 Titan X GPUs
  • Metrics. PSNR, SSIM, IFC
  • 0.25s per 288x288 image on a Titan X GPU
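Adjustable gradient clipping (introduced in VDSR) can be sketched as below: clipping to [-θ/γ, θ/γ] keeps the effective update step γ·g within [-θ, θ], so a large learning rate can be used early in training without gradient explosion. `clip_gradient` is an illustrative name:

```python
import numpy as np

def clip_gradient(grad, gamma, theta=0.01):
    """Clip gradients to [-theta/gamma, theta/gamma].

    gamma: current learning rate; theta: gradient clipping parameter.
    As gamma decays during training, the allowed gradient range widens.
    """
    bound = theta / gamma
    return np.clip(grad, -bound, bound)

g = np.array([-5.0, 0.001, 5.0])
print(clip_gradient(g, gamma=0.1))   # bound = 0.1 -> [-0.1, 0.001, 0.1]
```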

2.4. Study of B and U



2.5. Comparison




2.6. Discussion



  • DRRN_NS. variant without weight sharing across residual units
  • DRRN_C. variant with a chain structure (no local residual learning)

  • local residual learning (LRL) improves over VDSR at all depths

  • weight-sharing DRRN (the recursive strategy) outperforms the non-shared variant and is less prone to overfitting